
    Business Inferences and Risk Modeling with Machine Learning; The Case of Aviation Incidents

    Machine learning becomes truly valuable only when decision-makers begin to depend on it to optimize decisions. Instilling trust in machine learning is critical for businesses in their efforts to interpret data, derive insights, and make their analytical choices accessible and accountable. In the field of aviation, the innovative application of machine learning and analytics can facilitate an understanding of the risk of accidents and other incidents. These occur infrequently, generally in an irregular, unpredictable manner, and cause significant disruptions; hence, they are classified as "high-impact, low-probability" (HILP) events. Aviation incident reports are inspected by experts, but it is also important to have a comprehensive overview of incidents and their holistic effects. This study provides an interpretable machine-learning framework for predicting aircraft damage. In addition, it describes patterns of flight specifications detected through a simulation tool and illuminates the underlying reasons for specific aviation accidents. As a result, we can predict aircraft damage with 85% accuracy and 84% in-class accuracy. Most importantly, we simulate possible combinations of flight type, aircraft type, and pilot expertise to arrive at insights, and we recommend actions that can be taken by aviation stakeholders such as airport managers, airlines, flight training companies, and aviation policy makers. In short, we combine predictive results with simulations to interpret findings and prescribe actions.
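    The combination-simulation idea described above can be sketched as follows. This is a minimal illustration, not the study's actual model: `predict_damage_prob` is a hypothetical stand-in for a trained classifier, and the category values and risk weights are invented.

```python
from itertools import product

# Illustrative categories and made-up per-factor risk weights; the study's
# trained model and encodings are not reproduced here.
FLIGHT_TYPE = {"personal": 0.30, "instructional": 0.25, "commercial": 0.15}
AIRCRAFT_TYPE = {"single_engine": 0.20, "multi_engine": 0.10, "helicopter": 0.25}
PILOT_EXPERTISE = {"student": 0.35, "private": 0.25, "airline_transport": 0.10}

def predict_damage_prob(flight, aircraft, pilot):
    """Hypothetical stand-in for a trained classifier: combine the
    per-factor weights into a pseudo-probability of substantial damage."""
    score = FLIGHT_TYPE[flight] + AIRCRAFT_TYPE[aircraft] + PILOT_EXPERTISE[pilot]
    return min(score, 1.0)

# Simulate every flight-type / aircraft-type / pilot-expertise combination
# and rank the riskiest scenarios for stakeholder review.
scenarios = [
    (predict_damage_prob(f, a, p), f, a, p)
    for f, a, p in product(FLIGHT_TYPE, AIRCRAFT_TYPE, PILOT_EXPERTISE)
]
for prob, f, a, p in sorted(scenarios, reverse=True)[:3]:
    print(f"{prob:.2f}  {f} / {a} / {p}")
```

    Exhaustively scoring the scenario grid, as above, is what lets the predictions be turned into recommendations for specific stakeholder groups.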

    Evidence-Based Managerial Decision-Making With Machine Learning: The Case of Bayesian Inference in Aviation Incidents

    Understanding the factors behind aviation incidents is essential, not only because of the lethality of the accidents but also because of the incidents' direct and indirect economic impact. Even minor incidents trigger significant economic damage and create disruptions to aviation operations. It is crucial to investigate these incidents to understand the underlying reasons and, hence, reduce the risk associated with physical and financial safety in a precarious industry like aviation. The findings may provide decision-makers with a causally accurate means of investigating the topic while untangling the difficulties concerning statistical associations and causal effects. This research aims to identify the significant variables, and their probabilistic dependencies and relationships, that determine the degree of aircraft damage. The value and contribution of this study include (1) developing a fully automatic, machine-learning-prediction-based decision support system (DSS) for aircraft damage severity, (2) conducting a deep network analysis of the affinity between predictor variables using probabilistic graphical modeling (PGM), and (3) implementing a user-friendly dashboard to interpret the business insights arising from the design and development of the Bayesian Belief Network (BBN). By leveraging a large, real-world dataset, the proposed methodology captures the probability-based interrelations among air-terminal, flight, flight-crew, and air-vehicle-related characteristics as explanatory variables, thereby revealing the underlying, complex interactions in accident severity. This research contributes significantly to the current body of knowledge by defining and proving a methodology for automatically categorizing aircraft damage severity based on flight, aircraft, and PIC (pilot in command) information. Moreover, the study combines the findings of the Bayesian Belief Network with the decades of aviation expertise of a subject matter expert, drawing and explaining the association map to find the root causes of the problems and the accident-related variables.
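    The kind of probabilistic reasoning a Bayesian Belief Network supports can be illustrated with a toy, hand-specified network. Every variable and probability below is invented for illustration; the study learns its network structure and tables from real incident data.

```python
# Toy BBN with two parents of one child: Expertise -> Damage <- Weather.
# All numbers are invented; the paper's BBN is learned from a real dataset.
P_expertise = {"low": 0.4, "high": 0.6}
P_weather = {"bad": 0.3, "good": 0.7}
# Conditional probability table: P(damage = "substantial" | expertise, weather)
P_damage = {
    ("low", "bad"): 0.60, ("low", "good"): 0.30,
    ("high", "bad"): 0.35, ("high", "good"): 0.10,
}

def p_substantial_given_weather(weather):
    """P(damage = substantial | weather), summing out pilot expertise."""
    return sum(P_expertise[e] * P_damage[(e, weather)] for e in P_expertise)

# Predictive query: how likely is substantial damage in bad weather?
print(p_substantial_given_weather("bad"))

# Diagnostic query via Bayes' rule: given substantial damage in bad
# weather, how likely is it the pilot had low expertise?
post_low = (P_expertise["low"] * P_damage[("low", "bad")]
            / p_substantial_given_weather("bad"))
print(post_low)
```

    The diagnostic direction is what makes a BBN useful for root-cause analysis: evidence on an outcome updates beliefs about its upstream causes.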

    MDSCAN: AN EXPLAINABLE ARTIFICIAL INTELLIGENCE ARTIFACT FOR MENTAL HEALTH SCREENING

    This paper presents a novel artifact called MDscan that can help mental health professionals quickly screen a large number of patients for ten mental disorders. MDscan uses patient responses to the SCL-90-R clinical questionnaire to create a full-color image, similar to radiological images, that identifies which disorder or combination of disorders may afflict a patient, the severity of the disorder, and the underlying logic of this prediction, using an explainable artificial intelligence (XAI) approach. While prior artificial intelligence (AI) tools have seen limited acceptance in clinical practice because of the lack of transparency and interpretability in their black-box models, the XAI approach used in MDscan is a white-box model that explains which patient features contribute to the predicted outcome and to what extent. Using patient data from a mental health clinic, we demonstrate that MDscan outperforms current (expert-based) clinical practice by an average of 20%.
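    The white-box idea of attributing a prediction to individual features can be sketched with a tiny additive scoring model. The scale names, weights, and bias below are invented for illustration, not MDscan's actual parameters or the real SCL-90-R scoring.

```python
import math

# Invented white-box screening model: each questionnaire scale contributes
# weight * response to a disorder score; real SCL-90-R weights are not shown.
weights = {"anxiety": 0.8, "depression": 1.1, "somatization": 0.4}
bias = -2.0

def predict_with_explanation(responses):
    """Return a predicted probability plus each feature's additive
    contribution, so the prediction can be traced back to its inputs."""
    contributions = {k: weights[k] * responses[k] for k in weights}
    score = bias + sum(contributions.values())
    prob = 1 / (1 + math.exp(-score))  # logistic link
    return prob, contributions

prob, contrib = predict_with_explanation(
    {"anxiety": 2.0, "depression": 1.5, "somatization": 0.5})
print(prob, contrib)
```

    Because the score is a sum, each feature's share of the prediction is exact, which is the property that distinguishes a white-box model from a post-hoc approximation of a black box.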

    Disentangling human trafficking types and the identification of pathways to forced labor and sex: an explainable analytics approach

    Terms such as human trafficking and modern-day slavery are ephemeral but reflect manifestations of oppression, servitude, and captivity that have perpetually threatened the basic rights of all humans. Operations research and analytical tools offering practical wisdom have paid scant attention to this overarching problem. Motivated by this lacuna, this study considers two of the most prevalent categories of human trafficking: forced labor and forced sex. Using one of the largest available datasets, from the Counter-Trafficking Data Collaborative (CTDC), we examine patterns related to forced sex and forced labor. Our study uses a two-phase approach focusing on explainability: Phase 1 involves logistic regression (LR) followed by association rules analysis, and Phase 2 employs Bayesian Belief Networks (BBNs) to uncover intricate pathways leading to human trafficking. This combined approach provides a comprehensive understanding of the factors contributing to human trafficking, effectively addressing the limitations of conventional methods. We confirm and challenge some of the key findings in the extant literature and call for better prevention strategies. Our study goes beyond the pretext of analytics usage by prescribing how to incorporate our results in combating human trafficking.
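    The association-rules phase can be sketched with the standard support, confidence, and lift metrics. The tiny case records below are invented stand-ins; they are not CTDC fields or data.

```python
# Tiny invented case records (sets of attributes per case); the real
# CTDC variables are not reproduced here.
cases = [
    {"minor", "forced_labor"},
    {"minor", "forced_labor"},
    {"adult", "forced_sex"},
    {"minor", "forced_sex"},
    {"adult", "forced_labor"},
]

def support(itemset):
    """Fraction of cases containing every item in the set."""
    return sum(itemset <= c for c in cases) / len(cases)

def confidence(lhs, rhs):
    """P(rhs | lhs): how often the rule's consequent holds given its antecedent."""
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):
    """Confidence relative to the consequent's base rate; > 1 means association."""
    return confidence(lhs, rhs) / support(rhs)

rule = ({"minor"}, {"forced_labor"})
print(support(rule[0] | rule[1]), confidence(*rule), lift(*rule))
```

    On real data, rules are mined by enumerating candidate itemsets and keeping those whose support, confidence, and lift clear chosen thresholds; the metrics themselves are exactly as above.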

    What Makes Accidents Severe! Explainable Analytics Framework with Parameter Optimization

    Highlights:
    - Holistic XAN model combines descriptive, predictive, and prescriptive analytics.
    - Cutting-edge techniques for feature selection, optimization, and explanations.
    - Transparent justifications for factors enhance trust and assist domain experts.
    - Interpretable representations assist in intelligent decision-making.

    Abstract: Most analytics models are built on complex internal learning processes and calculations, which might be unintuitive, opaque, and incomprehensible to humans. Analytics-based decisions must be transparent and intuitive to foster greater human acceptance of, and confidence in, analytics. Explainable analytics models are transparent models in which the primary factors and weights that lead to a prediction can be explained. Typical AI models are non-transparent or opaque models, in which even the designers cannot explain how their models arrive at a specific decision. Transparent models help decision-makers understand their judgments and build trust in analytics. This study introduces an innovative, comprehensive model that fuses descriptive, predictive, and prescriptive analytics, offering a fresh perspective on car accident severity. Our methodological contribution lies in the application of advanced techniques to address data-related challenges, optimize feature selection, develop predictive models, and fine-tune parameters. Importantly, we also incorporate model-agnostic interpretation techniques, which separate explanations from the models themselves and further enhance the transparency and interpretability of our approach. Our findings should provide novel insights for a domain expert to understand accident severity. The explainable analytics approach suggested in this study supplements non-transparent machine learning prediction models, particularly optimized ensemble models. Our model's end product is a comprehensible representation of crash severity factors. To obtain a more trustworthy assessment of accident severity, this model may be supplemented with insurance data, medical data such as blood work and pulse rate, and previous medical history.
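    Model-agnostic interpretation, in the spirit described above, can be sketched with permutation importance, which works for any predictor because it only queries the model's outputs. The tiny model and data below are invented; this is not the study's optimized ensemble.

```python
import random

# Invented data: two features, label determined entirely by x0.
random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if x0 > 0.5 else 0 for x0, _ in X]

def model(x0, x1):
    """Any black-box predictor works here; a trivial threshold rule for demo."""
    return 1 if x0 > 0.5 else 0

def accuracy(data, labels):
    return sum(model(*x) == t for x, t in zip(data, labels)) / len(labels)

def permutation_importance(feature_idx):
    """Drop in accuracy when one feature's column is shuffled: a large drop
    means the model relies on that feature; near zero means it is ignored."""
    shuffled = [list(x) for x in X]
    col = [x[feature_idx] for x in X]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature_idx] = v
    return accuracy(X, y) - accuracy([tuple(r) for r in shuffled], y)

print(permutation_importance(0), permutation_importance(1))
```

    Because the technique never inspects the model's internals, the same code applies unchanged to an ensemble, a neural network, or a hand-written rule, which is precisely what "model-agnostic" means here.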

    Investigating injury severity risk factors in automobile crashes with predictive analytics and sensitivity analysis methods

    Investigation of the risk factors that contribute to injury severity in motor vehicle crashes has proved to be a thought-provoking and challenging problem. The results of such investigation can help better understand and potentially mitigate the severe injury risks involved in automobile crashes and thereby advance the well-being of people involved in these traffic accidents. Many factors were found to have an impact on the severity of injury sustained by occupants in the event of an automobile accident. In this analytics study, we used a large and feature-rich crash dataset along with a number of predictive analytics algorithms to model the complex relationships between varying levels of injury severity and the crash-related risk factors. Applying a systematic series of information fusion-based sensitivity analyses to the trained predictive models, we identified the relative importance of the crash-related risk factors. The results provided invaluable insights for the use of predictive analytics in this domain and exposed the relative importance of crash-related risk factors with the changing levels of injury severity.
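    One common form of information fusion over sensitivity results is an accuracy-weighted average of each model's variable importance scores. The sketch below assumes that scheme; the model names, importances, and accuracies are invented, not the study's results.

```python
# Invented per-model variable importances (each normalized to sum to 1)
# and invented model accuracies; the study's actual scores are not shown.
importances = {
    "decision_tree": {"speed": 0.50, "seatbelt": 0.30, "weather": 0.20},
    "neural_net":    {"speed": 0.40, "seatbelt": 0.40, "weather": 0.20},
    "svm":           {"speed": 0.45, "seatbelt": 0.25, "weather": 0.30},
}
accuracy = {"decision_tree": 0.80, "neural_net": 0.85, "svm": 0.75}

def fused_importance():
    """Fuse per-model importances, weighting each model by its accuracy
    so better-performing models contribute more to the final ranking."""
    total_w = sum(accuracy.values())
    fused = {}
    for model, imp in importances.items():
        w = accuracy[model] / total_w
        for var, score in imp.items():
            fused[var] = fused.get(var, 0.0) + w * score
    return fused

ranking = sorted(fused_importance().items(), key=lambda kv: -kv[1])
print(ranking)
```

    The fused scores remain normalized (they still sum to 1), so the output reads directly as a relative-importance ranking across all models at once.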